
Latent Dynamics Learning for Time-Continuous Reduced Order Models of Parametrized PDEs
The recent adoption of data-driven deep learning (DL)-based approaches in the context of reduced order modeling has made it possible to overcome many of the limitations affecting traditional reduced basis methods. These techniques address the two main tasks involved in the construction of ROMs in a time-dependent parameterized context: nonlinear dimensionality reduction by means of autoencoders (AEs), and modeling of the latent dynamics of the resulting reduced representation, via regressive, autoregressive, or recurrent approaches. However, both traditional and DL-based ROMs are intrinsically tied to the temporal discretization employed during the offline training phase. This is a critical constraint, restricting the ability of ROMs to adapt to different temporal resolutions during the online phase: whenever a new temporal discretization is encountered, it is often necessary to rebuild the reduced basis or undergo additional fine-tuning stages, thereby increasing the offline computational cost. A central question in reduced order modeling is therefore whether the ROM solution provides a meaningful time-continuous approximation of the FOM solution, in which case the learned ROM could be queried at any given time while retaining a prescribed level of accuracy.
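The two tasks above, dimensionality reduction and latent dynamics modeling, can be sketched schematically. In the sketch below, a linear SVD-based "autoencoder" and a one-step linear latent model are illustrative stand-ins for the nonlinear AE and the regressive/autoregressive/recurrent models of the actual approach, on synthetic snapshot data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy full-order snapshots: 64 spatial dofs, 50 time instants, generated
# from a 2-dimensional latent structure so that exact reduction is possible.
latent_true = rng.standard_normal((50, 2))
lift = rng.standard_normal((2, 64))
snapshots = latent_true @ lift                     # shape (50, 64)

# Task 1: dimensionality reduction. A linear "autoencoder" via truncated SVD
# stands in for the nonlinear AE described in the abstract.
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
n = 2                                              # latent dimension
encode = lambda x: x @ Vt[:n].T                    # R^{N_h} -> R^n
decode = lambda z: z @ Vt[:n]                      # R^n  -> R^{N_h}

# Task 2: latent dynamics. Fit a one-step linear model z_{k+1} = z_k @ A,
# a simple stand-in for the regressive/autoregressive/recurrent options.
Z = encode(snapshots)
A, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)

# The reduced representation reconstructs the snapshots (numerically) exactly
# here, because the synthetic data have rank 2.
err = np.linalg.norm(snapshots - decode(encode(snapshots)))
print(err < 1e-8)
```

The point of the sketch is the division of labor: the encoder/decoder pair handles the spatial reduction, while a separate model evolves the latent state in time.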
To address this problem, we introduce the latent dynamics model (LDM) mathematical framework, which enables us to: (1) devise a ROM architecture in a time-continuous setting; (2) subsequently analyze the impact of employing a numerical integration scheme for the solution of the latent dynamics; and (3) characterize its approximation capabilities in a learnable setting, rigorously demonstrating that it provides an accurate, time-continuous approximation of the FOM solution.
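The continuous setting above can be summarized in formulas; the notation here is illustrative rather than taken from the work ($u_h$ the semi-discrete FOM state, $\mu$ the parameters, $E_\theta$, $D_\theta$ the encoder and decoder, $f_\theta$ the learned latent vector field):

```latex
% FOM: semi-discretized parameterized dynamical system
\dot{u}_h(t;\mu) = f_h\big(u_h(t;\mu), t; \mu\big), \qquad u_h(0;\mu) = u_{h,0}(\mu)

% LDM: latent state via the encoder, evolved continuously in time
z(t;\mu) = E_\theta\big(u_h(t;\mu)\big), \qquad
\dot{z}(t;\mu) = f_\theta\big(z(t;\mu), t; \mu\big)

% Time-continuous ROM approximation via the decoder, queryable at any t
\tilde{u}_h(t;\mu) = D_\theta\big(z(t;\mu)\big) \approx u_h(t;\mu)
```

Because the latent dynamics are posed as an ODE rather than a discrete-time map, the approximation is defined for every $t$, not only at the training time instants.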
In a DL context, the resulting reduced order modeling framework relies on the AE-Neural ODE architecture, for which different architectural choices are proposed, aimed at enhancing interpretability and efficiency. Numerical experiments, performed on high-dimensional dynamical systems arising from the semi-discretization of parameterized nonlinear time-dependent PDEs, demonstrate that the framework exhibits a time-continuous approximation property, enabling zero-shot generalization to temporal discretizations finer than the one employed at training time.
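The zero-shot refinement property can be illustrated on a toy latent ODE: a fixed vector field integrated with a classical Runge-Kutta 4 scheme remains accurate when queried on a time grid finer than the "training" one. The vector field and grids below are hypothetical stand-ins for the trained Neural ODE and the offline/online discretizations.

```python
import numpy as np

# Hypothetical latent vector field (stands in for the trained Neural ODE):
# a harmonic oscillator, chosen because its exact solution is known.
def f(z, t):
    return np.array([z[1], -z[0]])

def rk4(f, z0, ts):
    """Integrate dz/dt = f(z, t) over the time grid ts with classical RK4."""
    z = np.array(z0, dtype=float)
    out = [z]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h = t1 - t0
        k1 = f(z, t0)
        k2 = f(z + 0.5 * h * k1, t0 + 0.5 * h)
        k3 = f(z + 0.5 * h * k2, t0 + 0.5 * h)
        k4 = f(z + h * k3, t1)
        z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(z)
    return np.array(out)

z0 = [1.0, 0.0]
coarse = np.linspace(0.0, 2 * np.pi, 21)    # "training" discretization
fine = np.linspace(0.0, 2 * np.pi, 201)     # finer online discretization

# Exact first component is cos(t); refining the grid does not degrade accuracy.
err_coarse = np.abs(rk4(f, z0, coarse)[:, 0] - np.cos(coarse)).max()
err_fine = np.abs(rk4(f, z0, fine)[:, 0] - np.cos(fine)).max()
print(err_fine <= err_coarse)
```

Because the learned object is a vector field rather than a fixed-step update map, nothing in the model is tied to the step size, which is what allows the online grid to differ from the offline one.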